Národní úložiště šedé literatury
Extensions to Probabilistic Linear Discriminant Analysis for Speaker Recognition
Plchot, Oldřich ; Fousek, Petr (reviewer) ; McCree, Alan (reviewer) ; Burget, Lukáš (supervisor)
This thesis deals with probabilistic models for automatic speaker verification. In particular, the Probabilistic Linear Discriminant Analysis (PLDA) model, which models the i-vector representation of speech utterances, is analyzed in detail. The thesis proposes extensions to the standard state-of-the-art PLDA model. The newly proposed Full Posterior Distribution PLDA models the uncertainty associated with the i-vector generation process. A new discriminative approach to training the speaker verification system based on the PLDA model is also proposed. When comparing the original PLDA with the model extended by considering the i-vector uncertainty, results obtained with the extended model show up to 20% relative improvement on tests with short segments of speech. As the test segments get longer (more than one minute), the performance gain of the extended model decreases, but it is never worse than the baseline. Training data are, however, usually available in the form of sufficiently long segments, and therefore, in such cases, there is no gain from using the extended model for training. Instead, training can be performed with the original PLDA model, and the extended model can be used when the task is to test on short segments. The discriminative classifier is based on classifying pairs of i-vectors into two classes representing target and non-target trials. The functional form for obtaining the score for every i-vector pair is derived from the PLDA model, and training is based on logistic regression minimizing the cross-entropy error function between the correct labeling of all trials and the probabilistic labeling proposed by the system. The results obtained with the discriminatively trained system are similar to those obtained with the generative baseline, but the discriminative approach shows the ability to output better-calibrated scores. This property leads to better actual verification performance on an unseen evaluation set, which is important for real-world use.
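The pairwise discriminative training described in this abstract can be illustrated with a short sketch: i-vector pairs are mapped to a quadratic (bilinear) feature expansion, and a single weight vector is trained with logistic regression under the cross-entropy loss. This is a minimal illustration under assumptions made here (the exact expansion, the synthetic trials, and all dimensions are not taken from the thesis).

```python
# Minimal sketch of PLDA-style pairwise scoring trained as logistic regression.
# The quadratic pair expansion and the synthetic data are illustrative assumptions.
import numpy as np

def pair_features(x1, x2):
    """Symmetric quadratic expansion of an i-vector pair (x1, x2)."""
    cross = np.outer(x1, x2) + np.outer(x2, x1)    # bilinear (cross-speaker) term
    within = np.outer(x1, x1) + np.outer(x2, x2)   # within-pair quadratic term
    return np.concatenate([cross.ravel(), within.ravel(), x1 + x2, [1.0]])

def cross_entropy_grad(w, feats, labels):
    """Mean cross-entropy between true labels and the system's probabilistic labels."""
    p = 1.0 / (1.0 + np.exp(-(feats @ w)))
    loss = -np.mean(labels * np.log(p + 1e-12) + (1 - labels) * np.log(1 - p + 1e-12))
    grad = feats.T @ (p - labels) / len(labels)
    return loss, grad

# Toy target/non-target trials.
rng = np.random.default_rng(0)
dim, n_trials = 10, 200
pairs = rng.standard_normal((n_trials, 2, dim))
labels = rng.integers(0, 2, n_trials).astype(float)
feats = np.stack([pair_features(p[0], p[1]) for p in pairs])

w = np.zeros(feats.shape[1])
for step in range(200):                 # plain gradient descent on the cross-entropy
    loss, grad = cross_entropy_grad(w, feats, labels)
    w -= 0.1 * grad

trial_score = feats[0] @ w              # score for one i-vector pair
```

In practice, one would initialize such a classifier from a generatively trained PLDA model rather than from zeros, in line with the functional form being derived from PLDA.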
Improving Robustness of Speaker Recognition using Discriminative Techniques
Novotný, Ondřej ; Ferrer, Luciana (reviewer) ; Pollák, Petr (reviewer) ; Černocký, Jan (supervisor)
This work deals with discriminative techniques in speaker verification systems that improve the robustness of these systems against factors that negatively affect their performance, such as noise, reverberation, or the transmission channel. The thesis consists of two main parts. The first part gives a theoretical introduction to current state-of-the-art speaker verification systems. The steps of the recognition system are described, from the extraction of acoustic features, through the extraction of vector representations of recordings, to the final recognition score computation. Particular emphasis is placed on the techniques for extracting a vector representation of a recording, where we describe two different paradigms: i-vectors and x-vectors. The second part of the work focuses on discriminative techniques for increasing robustness. Their description is organized to match the gradual passage of a recording through the verification system. First, attention is paid to signal pre-processing using a neural network for noise reduction and speech enhancement; this pre-processing is a universal technique independent of the verification system. The work then focuses on the use of a discriminative approach in the extraction of features and in the extraction of vector representations of recordings. Furthermore, this work sheds light on the transition from generative systems to discriminative systems and, to give fuller context, also describes techniques that historically preceded this transition. All presented techniques are experimentally verified and their benefits evaluated. We propose several techniques that have proved successful both with the generative i-vector approach and with discriminative x-vectors, bringing considerable improvements. For completeness, other robustness techniques, such as score normalization and multi-condition training, are also covered. Finally, the work examines the robustness of discriminative systems with respect to the data used in their training.
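The enhancement pre-processing mentioned in this abstract can be sketched as a small network mapping noisy spectral frames to clean ones, applied before any feature extraction. The sketch below is a hedged illustration only: the feed-forward architecture, mean-squared-error objective, layer sizes, and synthetic parallel data are assumptions and not the thesis' actual setup.

```python
# Sketch of neural-network speech enhancement as a front-end, independent of the
# verification back-end. Architecture, features, and data are illustrative assumptions.
import torch
import torch.nn as nn

n_fft_bins = 257                       # e.g. magnitude bins of a 512-point STFT
enhancer = nn.Sequential(
    nn.Linear(n_fft_bins, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, n_fft_bins),        # predicts the clean spectral frame
)
optimizer = torch.optim.Adam(enhancer.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy parallel data: (noisy, clean) log-spectrum frames.
noisy = torch.randn(1024, n_fft_bins)
clean = noisy - 0.1 * torch.randn(1024, n_fft_bins)

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(enhancer(noisy), clean)
    loss.backward()
    optimizer.step()

# At test time, frames of any recording are enhanced before acoustic feature extraction.
enhanced = enhancer(noisy[:1]).detach()
```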
Optimization of Gaussian Mixture Subspace Models and Related Scoring Algorithms in Speaker Verification
Glembek, Ondřej ; Brummer, Niko (reviewer) ; Campbell, William (reviewer) ; Burget, Lukáš (supervisor)
This thesis deals with Gaussian Mixture Subspace Modeling in automatic speaker recognition. The thesis consists of three parts. In the first part, Joint Factor Analysis (JFA) scoring methods are studied. The methods differ mainly in how they deal with the channel of the tested utterance. The general JFA likelihood function is investigated and the methods are compared both in terms of accuracy and speed. It was found that a linear approximation of the log-likelihood function gives results comparable to the full log-likelihood evaluation while simplifying the formula and dramatically reducing the computation time. In the second part, i-vector extraction is studied and two simplification methods are proposed. The motivation for this part was to allow the state-of-the-art technique to be used on small-scale devices and to set up a simple discriminative-training system. It is shown that, for long utterances, we can obtain very fast and compact i-vector systems at the cost of some accuracy. On a short-utterance (5-second) task, the results of the simplified systems are comparable to the full i-vector extraction. The third part deals with discriminative training in automatic speaker recognition. Previous work in the field is summarized and, based on the knowledge from the earlier chapters of this work, discriminative training of the i-vector extractor parameters is proposed. It is shown that discriminative re-training of the i-vector extractor can improve the system if the initial estimate is computed using the generative approach.
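For context, the baseline i-vector extraction that the proposed simplifications approximate can be written compactly: the i-vector is the posterior mean of a latent factor given zero- and first-order Baum-Welch statistics. The sketch below assumes diagonal UBM covariances and uses arbitrary dimensions and random matrices purely for illustration; it is not the simplified extractor from the thesis.

```python
# Minimal numpy sketch of standard i-vector extraction (the baseline being simplified).
# UBM size, subspace dimension, and all random values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
C, F, R = 64, 20, 50                           # UBM components, feature dim, i-vector dim

T = rng.standard_normal((C * F, R)) * 0.1      # total-variability matrix
Sigma_inv = np.ones(C * F)                     # inverse (diagonal) UBM covariances

N = rng.random(C) * 100                        # zero-order stats per component
f = rng.standard_normal(C * F)                 # centered first-order stats

# Posterior precision of the latent factor: I + T' Sigma^-1 N T
N_expanded = np.repeat(N, F)                   # one occupancy value per feature dimension
precision = np.eye(R) + T.T @ (Sigma_inv[:, None] * N_expanded[:, None] * T)
ivector = np.linalg.solve(precision, T.T @ (Sigma_inv * f))   # posterior mean
```

The per-utterance cost comes mainly from building and inverting this precision matrix, which depends on the utterance's occupation counts; this is the part that simplified extractors typically approximate.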
Exploiting Uncertainty Information in Speaker Verification and Diarization
Silnova, Anna ; Šmídl, Václav (reviewer) ; Villalba Lopez, Jesus Antonio (reviewer) ; Burget, Lukáš (supervisor)
This thesis considers two models that allow uncertainty information to be utilized in the tasks of Automatic Speaker Verification and Speaker Diarization. The first model is a modification of the widely used Gaussian Probabilistic Linear Discriminant Analysis (G-PLDA), which models the distribution of the vector utterance representations called embeddings. In G-PLDA, the embeddings are assumed to be generated by adding a noise vector, sampled from a Gaussian distribution, to a speaker-dependent vector. We show that when the noise is instead assumed to be sampled from a Student's t-distribution, the PLDA model (we call this version heavy-tailed PLDA, HT-PLDA) can use the uncertainty information when making verification decisions. Our model is conceptually similar to the HT-PLDA model defined by Kenny et al. in 2010, but, as we show in this thesis, it allows for fast scoring, while the original HT-PLDA definition requires considerable time and computational resources for scoring. We present an algorithm to train our version of HT-PLDA as a generative model, and we also consider various strategies for discriminative training of the model parameters. We test the performance of generatively and discriminatively trained HT-PLDA on the speaker verification task. The results indicate that HT-PLDA performs on par with the standard G-PLDA while having the advantage of being more robust against variations in the data pre-processing. Experiments on speaker diarization demonstrate that the HT-PLDA model not only provides better performance than the G-PLDA baseline but also produces better-calibrated Log-Likelihood Ratio (LLR) scores. In the second model, unlike in HT-PLDA, we do not consider the embeddings as observed data. Instead, the embeddings are normally distributed hidden variables whose precision carries information about the quality of the speech segment: for clean, long segments the precision should be high, and for short and noisy utterances it should be low. We show how such probabilistic embeddings can be incorporated into the G-PLDA framework and how the parameters of the hidden embedding influence its impact when computing the likelihood with this model. In the experiments, we demonstrate how an existing neural network (NN) embedding extractor can be used to provide not embeddings but the parameters of a probabilistic embedding distribution. We test the performance of the probabilistic embeddings model on the speaker diarization task. The results demonstrate that this model provides well-calibrated LLR scores, allowing for better diarization when no development dataset is available to tune the clustering algorithm.
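The uncertainty mechanism of the heavy-tailed model can be illustrated through the scale-mixture view of Student's t noise: a Gamma-distributed precision scaling is drawn per utterance, and the noise is Gaussian given that scaling. The sketch below is only a generative illustration under assumed dimensions, degrees of freedom, and random parameters; it is not the scoring or training algorithm from the thesis.

```python
# Generative sketch of HT-PLDA-style embeddings: Gaussian noise whose precision is
# scaled by a Gamma variable, so the marginal noise is Student's t distributed.
# Dimensions, degrees of freedom, and all parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
dim, spk_dim, nu = 20, 5, 3.0                  # embedding dim, speaker subspace dim, dof

V = rng.standard_normal((dim, spk_dim))        # speaker loading matrix
noise_cov = np.eye(dim)                        # within-speaker covariance

def sample_embedding(speaker_factor):
    """One embedding for a given speaker under the heavy-tailed generative assumption."""
    lam = rng.gamma(shape=nu / 2.0, scale=2.0 / nu)      # per-utterance precision scaling
    noise = rng.multivariate_normal(np.zeros(dim), noise_cov / lam)
    return V @ speaker_factor + noise

y = rng.standard_normal(spk_dim)               # latent speaker identity vector
emb1, emb2 = sample_embedding(y), sample_embedding(y)    # two utterances, same speaker
```

The per-utterance scaling variable is where uncertainty enters: a small sampled precision corresponds to a noisy or short segment whose embedding should be trusted less.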
